
    Reversible watermarking scheme with image-independent embedding capacity

    Permanent distortion is one of the main drawbacks of all irreversible watermarking schemes. Attempts to recover the original signal after it has passed the authentication process have been made only in the last few years. Some common problems, such as salt-and-pepper artefacts owing to intensity wraparound and low embedding capacity, can now be resolved. However, some significant problems remain unsolved. First, the embedding capacity is signal-dependent, i.e., capacity varies significantly with the nature of the host signal. The direct impact of this is compromised security for signals with low capacity; some signals may even be non-embeddable. Secondly, while seriously tackled in irreversible watermarking schemes, the well-known problem of block-wise dependence, which opens a security gap for the vector quantisation and transplantation attacks, has not been addressed by researchers of reversible schemes. This work proposes a reversible watermarking scheme with near-constant, signal-independent embedding capacity and immunity to the vector quantisation and transplantation attacks.
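
    To illustrate the signal-dependence problem, the sketch below shows a classic difference-expansion embedder (not the scheme proposed here): a pixel pair can carry a bit only if expanding its difference keeps both pixels in the 8-bit range, so the achievable capacity varies with the host image. The function names and the toy signals are illustrative assumptions.

        # Illustrative difference-expansion embedding check, NOT the proposed scheme.
        # Capacity is the number of pixel pairs that stay within [0, 255] after
        # expansion, which is why it depends on the host signal.

        def embeddable(a: int, b: int) -> bool:
            """A pair (a, b) can carry one bit if expanding its difference keeps
            both reconstructed pixels inside the 8-bit range."""
            l = (a + b) // 2            # integer average, preserved by embedding
            h = a - b                   # difference to be expanded
            for bit in (0, 1):          # check both possible payload bits
                h2 = 2 * h + bit
                a2 = l + (h2 + 1) // 2
                b2 = l - h2 // 2
                if not (0 <= a2 <= 255 and 0 <= b2 <= 255):
                    return False
            return True

        def capacity(pixels: list[int]) -> int:
            """Payload bits the signal can hold: one per embeddable pixel pair."""
            return sum(embeddable(a, b) for a, b in zip(pixels[0::2], pixels[1::2]))

        smooth = [128, 129] * 100      # low-contrast signal: every pair embeds
        harsh = [0, 255] * 100         # extreme-contrast signal: no pair embeds
        print(capacity(smooth), capacity(harsh))   # prints: 100 0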

    The Virtue of Patience when Scheduling Media in Presence of Feedback

    We consider streaming of pre-encoded and packetized media over best-effort networks in the presence of acknowledgment feedback. Given an estimate of future transmission resources and knowledge of past transmissions and received acknowledgments, a scheduling algorithm is a mechanism that selects the data to send over the network at any given time so as to minimize the end-to-end distortion. Our work first reveals the suboptimality of popular greedy schedulers, which can be strongly penalized by premature retransmissions. It then proposes an original scheduling algorithm that avoids premature retransmissions while preserving the simplicity of the greedy paradigm. The proposed patient greedy (PG) scheduler is shown to save up to 50% of the rate in comparison with the conventional greedy approach.
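
    The contrast drawn here can be sketched as follows, under assumed names and a simplistic distortion-per-byte ranking (the paper's actual scheduler is rate-distortion optimized): a plain greedy scheduler resends an unacknowledged packet as soon as it ranks highest, while the patient variant skips packets whose acknowledgment may still be in flight.

        # Illustrative greedy vs. "patient" greedy packet selection (assumed model).
        from dataclasses import dataclass

        @dataclass
        class Packet:
            pid: int
            distortion_gain: float   # distortion removed if the packet arrives
            size: int                # bytes
            deadline: float          # decoding deadline (seconds)
            last_sent: float = -1.0  # time of the most recent transmission
            acked: bool = False

        def greedy_pick(pending, now):
            """Plain greedy: send the best distortion-per-byte packet, even if an
            ACK for an earlier transmission might still arrive."""
            live = [p for p in pending if not p.acked and now < p.deadline]
            return max(live, key=lambda p: p.distortion_gain / p.size, default=None)

        def patient_pick(pending, now, rtt):
            """Patient greedy: same ranking, but a packet sent less than one RTT
            ago is skipped, avoiding premature (wasteful) retransmissions."""
            live = [p for p in pending
                    if not p.acked and now < p.deadline
                    and (p.last_sent < 0 or now - p.last_sent >= rtt)]
            return max(live, key=lambda p: p.distortion_gain / p.size, default=None)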

    Packetized Media Streaming with Comprehensive Exploitation of Feedback Information

    This paper addresses the problem of streaming packetized media over a lossy packet network, with sender-driven (re)transmission using acknowledgment feedback. The different transmission scenarios associated with a group of interdependent media data units are abstracted in terms of a finite alphabet of policies for each single data unit. A rate-distortion optimized Markovian framework is proposed, which supports the use of comprehensive feedback information. In contrast to previous work in rate-distortion optimized streaming, where the definition of a data unit's transmission policy does not take into account the feedback expected for other data units, our framework considers all acknowledgment packets when defining the streaming policy of a single data unit. More specifically, the notion of master and slave data units is introduced to define dependent streaming policies between media packets: the policy adopted to transmit a slave data unit depends on the acknowledgments received about its masters. One of the main contributions of our work is a methodology that limits the space of dependent policies for the RD optimized streaming strategy. A number of rules are formulated to select a set of relevant master/slave relationships, defined as the dependencies that are likely to bring an RD performance gain in the streaming system. These rules provide a limited-complexity solution to the rate-distortion optimized streaming problem with comprehensive use of feedback information. Based on extensive simulations, we conclude that (i) the proposed set of relevant dependent policies achieves close to optimal performance while being computationally tractable, and (ii) the benefit of dependent policies is driven by the relative sizes and importance of interdependent data units. Our simulations demonstrate that dependent streaming policies can perform significantly better than independent streaming strategies, especially when some media data units bring a relatively large gain in distortion compared with the data units they depend on for correct decoding. We observe, however, that the benefit becomes marginal when the gain in distortion per unit of rate decreases along the media decoding dependency path. Since such a trend characterizes most conventional scalable coders, the implementation of dependent policies can reasonably be ruled out in these specific cases.
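
    A minimal sketch of the master/slave idea, with the policy alphabet and decision rule assumed for illustration only (the paper selects policies through rate-distortion optimization over a Markovian model):

        # Illustrative dependent streaming policies: a slave data unit's policy is
        # conditioned on the feedback observed for its master. Policy names and the
        # three-action alphabet are assumptions for this sketch.
        AGGRESSIVE   = ("send", "send", "send")   # spend rate at every opportunity
        CONSERVATIVE = ("wait", "send", "wait")   # spend rate only once
        SKIP         = ("wait", "wait", "wait")   # never transmit

        def slave_policy(master_acked):
            """Pick the slave's policy from its master's acknowledgment state:
            acknowledged -> the slave is decodable, transmit aggressively;
            known lost   -> the slave is useless, skip it;
            no feedback  -> hedge with a conservative policy."""
            if master_acked is True:
                return AGGRESSIVE
            if master_acked is False:
                return SKIP
            return CONSERVATIVE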

    Explicit window-based transport control protocols in lossy environments

    This paper addresses efficient packet loss recovery by retransmission in window-based congestion control protocols. It builds on explicit congestion control mechanisms to decouple packet loss detection from the congestion feedback signal. Implicit algorithms, by contrast, infer congestion from losses (which leads to window-size reductions) and therefore do not allow the performance of window-based transmission algorithms to be evaluated in lossy environments. We first propose a simple modification of TCP that enables explicit congestion control. Different retransmission strategies applicable to window-based congestion control protocols are then discussed in the framework of explicit congestion control. We introduce a new early retransmission timer that significantly improves error resiliency when combined with explicit congestion control. Extensive simulations then compare the error recovery mechanisms generally used in recent TCP implementations with the new loss monitoring and recovery strategies combined with explicit congestion control protocols. Performance is analyzed in a simple network topology where a bottleneck link is shared by loss-free and lossy connections. Retransmissions triggered by the proposed accurate loss monitoring mechanism are shown to result in a fair share of the bottleneck bandwidth among all connections, even for high loss ratios and bursty loss processes, while link utilization remains close to optimal. Explicit congestion control, combined with efficient error control strategies, can therefore provide a valid solution for reliable and controlled connections over lossy network infrastructures.
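
    The decoupling described here can be sketched as a loss monitor that flags overdue packets for early retransmission without touching the congestion window, which is governed separately by the explicit congestion signal. Class and parameter names are assumptions, not the paper's protocol.

        # Illustrative loss monitor for loss/congestion decoupling (assumed names).
        class LossMonitor:
            def __init__(self, alpha=0.125, margin=1.5):
                self.srtt = None        # smoothed round-trip-time estimate
                self.alpha = alpha      # EWMA weight for RTT samples
                self.margin = margin    # safety factor for the early retransmit timer
                self.outstanding = {}   # sequence number -> send time

            def on_send(self, seq, now):
                self.outstanding[seq] = now

            def on_ack(self, seq, now):
                sent = self.outstanding.pop(seq, None)
                if sent is not None:
                    sample = now - sent
                    self.srtt = sample if self.srtt is None else (
                        (1 - self.alpha) * self.srtt + self.alpha * sample)

            def overdue(self, now):
                """Packets whose ACK is late: candidates for early retransmission.
                No congestion-window reduction is triggered here; congestion is
                handled by the explicit feedback path."""
                if self.srtt is None:
                    return []
                timeout = self.margin * self.srtt
                return [s for s, t in self.outstanding.items() if now - t > timeout]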

    Coarse-to-fine textures retrieval in the JPEG 2000 compressed domain for fast browsing of large image databases

    In many applications, the amount and resolution of digital images have increased significantly over the past few years. For this reason, there is growing interest in techniques that allow efficient browsing of and searching within such huge data spaces. JPEG 2000, the latest compression standard from the JPEG committee, has several interesting features for handling very large images. In this paper, these features are used in a coarse-to-fine approach to retrieve specific information in a JPEG 2000 code-stream while minimizing the computational load required by such processing. In practice, a cascade of classifiers exploits the bit-depth and resolution scalability features intrinsic to JPEG 2000 to progressively refine the classification process. Comparison with existing techniques on a texture-retrieval task shows the efficiency of this approach.
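
    The coarse-to-fine cascade can be sketched as follows; decode_partial() and the per-stage thresholds stand in for JPEG 2000 resolution/bit-depth truncation and the paper's classifiers, and are assumptions of this sketch.

        # Illustrative coarse-to-fine cascade over a scalable code-stream.
        def cascade_retrieve(candidates, query, stages, decode_partial, score):
            """Each stage decodes a little more of every surviving candidate
            (higher resolution, more bitplanes) and keeps only the items whose
            similarity to the query clears that stage's threshold."""
            survivors = list(candidates)
            for resolution, bitplanes, threshold in stages:
                survivors = [item for item in survivors
                             if score(decode_partial(item, resolution, bitplanes),
                                      query) >= threshold]
                if not survivors:
                    break
            return survivors

        # Assumed stage schedule: cheap coarse passes first, one fine pass last.
        STAGES = [(0, 2, 0.3), (1, 4, 0.5), (2, 8, 0.7)]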

    Dynamic stroma reorganization drives blood vessel dysmorphia during glioma growth

    Glioma growth and progression are characterized by abundant development of blood vessels that are highly aberrant and poorly functional, with detrimental consequences for drug delivery efficacy. The mechanisms driving this vessel dysmorphia during tumor progression are poorly understood. Using longitudinal intravital imaging in a mouse glioma model, we identify that dynamic sprouting and functional morphogenesis of a highly branched vessel network characterize the initial tumor growth, changing dramatically to vessel expansion, leakage, and loss of branching complexity in the later stages. This vascular phenotype transition was accompanied by recruitment of predominantly pro-inflammatory M1-like macrophages in the early stages, followed by in situ repolarization to M2-like macrophages, which produced VEGF-A and relocated to perivascular areas. A similar enrichment and perivascular accumulation of M2 versus M1 macrophages correlated with vessel dilation and malignancy in human glioma samples of different WHO malignancy grades. Targeting macrophages using anti-CSF1 treatment restored normal blood vessel patterning and function. Combination treatment with chemotherapy showed a survival benefit, suggesting that targeting macrophages as the key driver of blood vessel dysmorphia in glioma progression presents opportunities to improve the efficacy of chemotherapeutic agents. We propose that vessel dysfunction is not simply a general feature of tumor vessel formation, but rather an emergent property resulting from a dynamic and functional reorganization of the tumor stroma and its angiogenic influences.

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th to Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Global comparison of awake and asleep mapping procedures in glioma surgery: an international multicenter survey

    Background: Mapping techniques are frequently used to preserve neurological function during glioma surgery. There is, however, no consensus regarding the use of many variables of these techniques. Currently, there are almost no objective data available about potential heterogeneity between surgeons and centers. The goal of this survey is therefore to globally identify, evaluate and analyze the local mapping procedures in glioma surgery. Methods: The survey was distributed to members of the neurosurgical societies of the Netherlands (Nederlandse Vereniging voor Neurochirurgie-NVVN), Europe (European Association of Neurosurgical Societies-EANS), and the United States (Congress of Neurological Surgeons-CNS) between December 2020 and January 2021, with questions about awake mapping, asleep mapping, assessment of neurological morbidity, and decision making. Results: Survey responses were obtained from 212 neurosurgeons from 42 countries. Overall, significant differences were observed in the equipment and settings used for both awake and asleep mapping, intraoperative assessment of eloquent areas, the use of surgical adjuncts and monitoring, anesthesia management, assessment of neurological morbidity, and perioperative decision making. Academic practices performed awake and asleep mapping procedures more often and more frequently employed a clinical neurophysiologist with telemetric monitoring. European neurosurgeons differed from US neurosurgeons regarding the modality for cortical/subcortical mapping and awake/asleep mapping, the use of surgical adjuncts, and anesthesia management during awake mapping. Discussion: This survey demonstrates the heterogeneity among surgeons and centers with respect to their procedures for awake mapping, asleep mapping, assessing neurological morbidity, and decision making in glioma patients. These data invite further evaluation of key variables that can be optimized and may therefore benefit from consensus.